# Knowledge Distillation Pretraining
MiniPLM-Qwen-200M
Apache-2.0
A 200M-parameter model based on the Qwen architecture, pretrained from scratch with the MiniPLM knowledge-distillation framework.
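For context, classic knowledge-distillation pretraining trains the small student model to match a larger teacher's next-token distribution, typically by minimizing the KL divergence between the two (MiniPLM itself refines this idea with an offline, corpus-level sampling strategy, which this sketch does not reproduce). A minimal plain-Python sketch of the generic distillation loss for a single token position, with hypothetical logit values:

```python
import math

def softmax(logits, temperature=1.0):
    # Numerically stable softmax over a list of logits.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Forward KL(teacher || student) over next-token distributions.

    This is the standard distillation objective, shown for illustration
    only; it is not MiniPLM's exact training procedure.
    """
    p = softmax(teacher_logits, temperature)  # teacher distribution
    q = softmax(student_logits, temperature)  # student distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# The loss is zero when the student matches the teacher exactly,
# and positive otherwise.
matched = kd_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
mismatched = kd_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])
```

In a real pretraining loop this loss would be averaged over all token positions in a batch and backpropagated through the student only, with the teacher held frozen.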
Large Language Model
Transformers
English
MiniLLM
© 2025
AIbase